deep-learning algorithm
AI tool scans faces to predict biological age and cancer survival
A simple selfie could hold hidden clues to one's biological age -- and even how long they'll live. That's according to researchers from Mass General Brigham, who developed a deep-learning algorithm called FaceAge. Using a photo of someone's face, the artificial intelligence tool generates a prediction of the subject's biological age -- the rate at which they are aging -- as opposed to their chronological age. FaceAge also predicts survival outcomes for people with cancer, according to a press release from MGB.
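The article does not describe FaceAge's architecture; as a generic, hypothetical illustration of the approach -- a convolutional backbone with a single regression output trained on face photos -- here is a minimal PyTorch sketch. The backbone choice, input size, and all other details are assumptions, not details of the actual model.

```python
# Hypothetical sketch of a face-to-age regression model (assumed design;
# NOT the actual FaceAge architecture, which the article does not specify).
import torch
import torch.nn as nn
from torchvision import models

class FaceAgeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)             # CNN feature extractor
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # one output: age in years
        self.net = backbone

    def forward(self, x):                 # x: (batch, 3, 224, 224) face crops
        return self.net(x).squeeze(-1)    # predicted biological age

model = FaceAgeRegressor()
ages = model(torch.randn(4, 3, 224, 224))
print(ages.shape)  # torch.Size([4])
```

Trained with a regression loss against known ages, a model like this learns facial features correlated with aging; survival prediction would additionally require clinical outcome labels.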
Using deep learning to help distinguish dark matter from cosmic noise
Gravity makes dark matter clump into dense halos, indicated by bright patches, where galaxies form. In this simulation, a halo like the one that hosts the Milky Way forms and a smaller halo resembling the Large Magellanic Cloud falls toward it. SLAC and Stanford researchers, working with collaborators from the Dark Energy Survey, have used simulations like these to better understand the connection between dark matter and galaxy formation. Dark matter is the invisible force holding the universe together – or so we think. It makes up around 85% of all matter and around 27% of the universe's contents, but since we can't see it directly, we have to study its gravitational effects on galaxies and other cosmic structures.
On the Implicit Bias in Deep-Learning Algorithms
Deep learning has been highly successful in recent years and has led to dramatic improvements in multiple domains. Deep-learning algorithms often generalize quite well in practice, namely, given access to labeled training data, they return neural networks that correctly label unobserved test data. However, despite much research, our theoretical understanding of generalization in deep learning is still limited. Neural networks used in practice often have far more learnable parameters than training examples. In such overparameterized settings, one might expect overfitting to occur, that is, the learned network might perform well on the training dataset and perform poorly on test data. Indeed, in overparameterized settings, there are many solutions that perform well on the training data, but most of them do not generalize well.
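To make the overparameterization point concrete, the toy sketch below (sizes chosen purely for illustration) trains a network with roughly 5,600 parameters on just 32 examples with random labels; the training loss still falls to near zero, which is exactly the memorizing, non-generalizing behavior described above.

```python
# Toy illustration: ~5.6k parameters vs. 32 training examples with RANDOM
# labels. Zero training loss is achievable yet says nothing about test data.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(32, 20)                   # 32 training examples, 20 features
y = torch.randint(0, 2, (32,)).float()    # random binary labels

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 1))
print("parameters:", sum(p.numel() for p in model.parameters()))  # 5633

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())  # ~0: the net memorizes the noise
```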
How Artificial Intelligence Is Driving Changes in Radiology
Described simply, artificial intelligence (AI) is a field that combines computer science and robust data sets to enable problem-solving. The umbrella term encompasses the subfields of machine learning and the more recently developed deep learning, which is itself a subfield of machine learning. Both use AI algorithms to create expert systems that make predictions or classifications based on input data. The first reports of AI use in radiology date back to 1992, when it was used to detect microcalcifications in mammography [1] and was more commonly known as computer-aided detection. It wasn't until around the mid-2010s that it really started to be seen as a potential solution to the daily challenges faced by radiologists, such as volume burden.
Deep learning lets algorithm produce best solutions to molecules' Schrödinger equations yet
A new deep-learning algorithm from researchers in Austria produces more accurate numerical solutions to the Schrödinger equation than ever before for a number of different molecules, at relatively modest computational cost. Surprisingly, the researchers found that, whereas some 'pre-training' of the algorithm could improve its predictive abilities, more substantial training was actively harmful. As the Schrödinger equation can be solved analytically only for the hydrogen atom, researchers wishing to estimate the energies of molecules are forced to rely on numerical methods. Simpler approximations such as density functional theory and the Hartree-Fock method, which is almost as old as the Schrödinger equation itself, can treat far larger systems but often give inaccurate results. Newer techniques such as the complete active space self-consistent field (CASSCF) method give results closer to experiment, but require much more computation.
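For reference, the problem all of these methods approximate is the molecular ground state. A minimal statement in standard notation (not taken from the article) is the time-independent Schrödinger equation together with the variational bound that parameterized trial wavefunctions -- including neural-network ansätze -- exploit:

```latex
% Time-independent Schr\"odinger equation for an N-electron wavefunction
\hat{H}\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N)

% Variational principle: any trial state \psi_\theta (e.g., a neural network
% with parameters \theta) upper-bounds the ground-state energy E_0, so
% minimizing E(\theta) over \theta approaches the exact solution.
E_0 \le E(\theta) =
  \frac{\langle \psi_\theta \mid \hat{H} \mid \psi_\theta \rangle}
       {\langle \psi_\theta \mid \psi_\theta \rangle}
```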
Will AI systems replace humanities professors?
There has been much hand-wringing about the crisis of the humanities, and recent breakthroughs in artificial intelligence have added to the angst. It is not only truck drivers whose jobs are threatened by automation. Deep-learning algorithms are also entering the domain of creative work. And now, they are demonstrating proficiency in the tasks that occupy humanities professors when they are not giving lectures: namely, writing papers and submitting them for publication in academic journals. Could academic publishing be automated?
Artificial Intelligence Needs Both Pragmatists and Blue-Sky Visionaries
Artificial intelligence thinkers seem to emerge from two communities. One is what I call blue-sky visionaries who speculate about the future possibilities of the technology, invoking utopian fantasies to generate excitement. Blue-sky ideas are compelling but are often clouded over by unrealistic visions and the ethical challenges of what can and should be built. In contrast, what I call muddy-boots pragmatists are problem- and solution-focused. They want to reduce the harms that widely used AI-infused systems can create.
Do our brains use the same kind of deep-learning algorithms used in AI?
Deep-learning researchers have found that certain neurons in the brain have shapes and electrical properties that appear to be well suited for "deep learning" -- the kind of machine intelligence used to beat humans at Go and chess. Canadian Institute for Advanced Research (CIFAR) Fellow Blake Richards and his colleagues -- Jordan Guerguiev at the University of Toronto, Scarborough, and Timothy Lillicrap at Google DeepMind -- developed an algorithm that simulates how a deep-learning network could work in our brains. It represents a biologically realistic way by which real brains could do deep learning. The finding is detailed in a study published December 5 in the open-access journal eLife.
Seeing the trees and the forest
"Most of these neurons are shaped like trees, with 'roots' deep in the brain and 'branches' close to the surface," says Richards.
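The eLife paper's model relies on segregated dendritic compartments; as a simpler illustration of the same theme -- credit assignment without the biologically implausible "weight transport" of exact backpropagation -- the NumPy sketch below implements feedback alignment, a related idea from Lillicrap and colleagues, in which the backward pass uses a fixed random matrix instead of the transposed forward weights. The toy task and all sizes are assumptions for illustration, not the paper's model.

```python
# Feedback alignment sketch: errors are projected back through a FIXED random
# matrix B rather than W2.T, so no neuron needs to "know" downstream weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # toy inputs
y = (X[:, :1] > 0).astype(float)          # toy binary targets

W1 = 0.1 * rng.standard_normal((10, 32))  # forward weights, layer 1
W2 = 0.1 * rng.standard_normal((32, 1))   # forward weights, layer 2
B  = 0.1 * rng.standard_normal((1, 32))   # fixed random feedback weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1)                   # hidden activity
    p = sigmoid(h @ W2)                   # output prediction
    err = p - y                           # output error
    delta_h = (err @ B) * (1.0 - h**2)    # feedback via B, not W2.T
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("training accuracy:", ((p > 0.5) == y).mean())
```

Empirically, the forward weights tend to align with the random feedback matrix during training, which is why learning still succeeds despite the inexact backward path.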
Faced With A Data Deluge, Astronomers Turn To Automation
Specifically, Huerta and his then graduate student Daniel George pioneered the use of so-called convolutional neural networks (CNNs), which are a type of deep-learning algorithm, to detect and decipher gravitational-wave signals in real time. Roughly speaking, training or teaching a deep-learning system involves feeding it data that are already categorized--say, images of galaxies obscured by lots of noise--and getting the network to identify the patterns in the data correctly. After their initial success with CNNs, Huerta and George, along with Huerta's graduate student Hongyu Shen, scaled up this effort, designing deep-learning algorithms that were trained on supercomputers using millions of simulated signatures of gravitational waves mixed in with noise derived from previous observing runs of Advanced LIGO--an upgrade to LIGO completed in 2015. For instance, Adam Rebei, a high school student in Huerta's group, showed in a recent study that deep learning can identify the complex gravitational-wave signals produced by the merger of black holes in eccentric orbits--something LIGO's traditional algorithms cannot do in real time. In a preprint paper last September, Nicholas Choma of New York University and his colleagues reported the development of a special type of deep-learning algorithm called a graph neural network, whose connections and architecture take advantage of the spatial geometry of the sensors in the ice and the fact that only a few sensors see the light from any given muon track.
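As an illustration of the kind of architecture described above, here is a minimal 1D-CNN sketch that labels a strain time series as signal-plus-noise or pure noise. The layer sizes and the assumed 1-second, 4096-sample input are placeholders, not details of Huerta and George's published models.

```python
# Hypothetical 1D CNN for gravitational-wave detection (assumed shapes and
# hyperparameters; not the published architecture).
import torch
import torch.nn as nn

class GWClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.head = nn.Linear(64, 2)          # logits: [noise, signal]

    def forward(self, x):                     # x: (batch, 1, samples)
        return self.head(self.features(x).squeeze(-1))

# Example: a batch of 1-second segments at an assumed 4096 Hz sampling rate.
model = GWClassifier()
logits = model(torch.randn(8, 1, 4096))
print(logits.shape)  # torch.Size([8, 2])
```

In the training regime the article describes, the positive class would come from simulated waveforms injected into real detector noise.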
Multi-label emotion classification of Urdu tweets
Urdu is a widely used language in South Asia and worldwide. While similar datasets are available in English, we created the first multi-label emotion dataset for Urdu, consisting of 6,043 tweets in the Nastalīq script annotated with six basic emotions. A multi-label (ML) classification approach was adopted to detect emotions in Urdu text, whose morphological and syntactic structure makes multi-label emotion detection a challenging problem. In this paper, we build a set of baseline classifiers: machine-learning algorithms (random forest (RF), decision tree (J48), sequential minimal optimization (SMO), AdaBoostM1, and bagging), deep-learning algorithms (a one-dimensional convolutional neural network (1D-CNN), long short-term memory (LSTM), and LSTM with CNN features), and a transformer-based baseline (BERT). We used a combination of text representations: stylometric features, pre-trained word embeddings, word-based n-grams, and character-based n-grams. The paper presents the annotation guidelines, dataset characteristics, and insights into the different methodologies used for Urdu emotion classification. We report results for all tested methods using micro-averaged F1, macro-averaged F1, accuracy, Hamming loss (HL), and exact match (EM).
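As a sketch of this evaluation setup -- on random placeholder features and labels, not the actual Urdu tweet data -- the snippet below trains a binary-relevance random-forest baseline and computes the four multi-label metrics reported in the paper. Note that scikit-learn's accuracy_score on a multi-label indicator matrix is subset accuracy, i.e., the exact-match (EM) measure.

```python
# Multi-label baseline + metrics sketch (placeholder data, not the dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import f1_score, hamming_loss, accuracy_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 100))           # e.g., n-gram/stylometric features
Y = (rng.random((500, 6)) < 0.3).astype(int)  # 6 binary emotion labels per tweet

# Binary relevance: one random forest per emotion label.
clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X[:400], Y[:400])
pred, true = clf.predict(X[400:]), Y[400:]

print("micro-F1    :", f1_score(true, pred, average="micro", zero_division=0))
print("macro-F1    :", f1_score(true, pred, average="macro", zero_division=0))
print("Hamming loss:", hamming_loss(true, pred))
print("exact match :", accuracy_score(true, pred))  # subset accuracy == EM
```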